21 July 2022 | Techdirt


It’s hardly a secret that upload filters don’t work well. Back in 2017, Felix Reda, then Shadow Rapporteur on the EU Copyright Directive in the European Parliament, put together a representative sample of the many different ways in which filters fail. A recent series of tweets by Markus Pössel, Senior Outreach Scientist at the Max Planck Institute for Astronomy, exposes rather well the key issues, which have not improved since then.

Facebook muted 41 seconds of a video he uploaded to Facebook because Universal Music Group (UMG) claimed to own the copyright for some of the audio that was played. Since the music in question came from Bach’s Well-Tempered Clavier, and Bach died in 1750, there’s obviously no copyright claim on the music itself, which is definitely in the public domain. Instead, it seems, the claim was for the performance of this public domain music, which UMG says was played by Keith Jarrett, a jazz and classical pianist, and noted interpreter of Bach. Except that it wasn’t, as Pössel explains:

Either I am flattered that a Bach piece that I recorded with my own ten fingers on my digital keyboard sounds just like when Keith Jarrett is playing it. Or be annoyed by the fact that @UMG is *again* falsely claiming music on Facebook that they definitely do not own the copyright to.

This underlines the fact that upload filters may recognize the music – that’s not hard – but they are terrible at recognizing the performer of that music. It gets worse:
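The composition-versus-performance gap is easy to see in miniature. The toy sketch below (my illustration, not how Content ID or UMG's filter actually works — real systems fingerprint spectral detail, not note lists) reduces two hypothetical renditions of the same Bach passage to pitch-class histograms. The note sequences and variable names are invented for the example. Anything that identifies the *piece* survives the reduction; everything that identifies the *performer* (octave, timing, timbre, ornamentation) is thrown away, so the two renditions look nearly identical:

```python
import numpy as np

def pitch_class_histogram(midi_notes):
    """Collapse a note sequence to a normalized 12-bin pitch-class histogram,
    discarding octave, timing, and timbre -- i.e. everything that
    distinguishes one *performance* of a piece from another."""
    hist = np.zeros(12)
    for n in midi_notes:
        hist[n % 12] += 1
    total = hist.sum()
    return hist / total if total else hist

def similarity(a, b):
    # Cosine similarity between two histograms (1.0 = identical profile).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical note lists: two renditions of the same arpeggiated figure
# (same pitches, different octave and an extra ornament), plus a different piece.
rendition_a = [60, 64, 67, 72, 76, 67, 72, 76]        # one performance
rendition_b = [48, 52, 55, 60, 64, 55, 60, 64, 64]    # another, octave down, ornamented
other_piece = [62, 66, 69, 74, 62, 66, 69, 74]        # unrelated material

s_same = similarity(pitch_class_histogram(rendition_a),
                    pitch_class_histogram(rendition_b))
s_diff = similarity(pitch_class_histogram(rendition_a),
                    pitch_class_histogram(other_piece))
print(round(s_same, 3), round(s_diff, 3))  # → 0.99 0.0
```

Two different performances of the same piece score near 1.0, while a different piece scores 0 — which is exactly the failure mode Pössel hit: a system tuned to recognize *what* is being played has nothing left with which to decide *who* is playing it.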

OK, I’ll go with “very annoyed” because if I then continue, Facebook @Meta DOES NOT EVEN GIVE ME THE OPTION TO COMPLAIN. They have grayed out the option to dispute the claim. They are dead wrong, but so sure of themselves that they do not even offer the option of disputing the claim, even though their system, in principle, provides such an option. And that, in a nutshell, is what’s wrong with companies like these today. Algorithms that make mistakes, biased towards big companies like @UMG.

This absurd situation is a foretaste of what is almost certainly going to happen all the time once major platforms are forced to use upload filters in the EU to comply with Article 17 of the Copyright Directive. Not only will they block legal material, but there will probably be a presumption that the algorithms must be right, so why bother complaining, when legislation tips the balance in favor of Big Content from the outset?

Follow me @glynmoody on Twitter, Diaspora, or Mastodon. Originally posted to WalledCulture.

Filed Under: copyright, copyright filters, counterclaim, counternotice, mistakes, public domain Companies: facebook, umg, universal music group

There’s surely some utility to NFTs buried somewhere underneath the monumental piles of bullshit, hype, and outright scams. But with cryptocurrency values tanking and the public losing interest, the NFT craze appears to be retreating just as quickly as it arrived.

That’s bad news for the numerous companies that — after bumbling through the bureaucratic process of new product approval and design — are only just releasing their NFTs now. Reddit, for example, last week launched (to the collective sound of a million yawns) “collectible avatars.” Said avatars are on the blockchain and most definitely NFTs, but in its announcement Reddit avoids the word like the plague.

Data clearly indicates that gamers aren’t really interested in NFTs and blockchain games. That doesn’t seem to matter. It didn’t matter when Ubisoft jumped into NFTs and was greeted with a response ranging from disgust to total apathy. And it didn’t matter to GameStop, which thought last week would be the perfect time to launch a big new NFT marketplace to gain some relevance not reliant on r/WallStreetBets.

Meanwhile, Sony is planning to launch “digital collectibles” which are “3D rendered representations of things like figurines of video game characters and past Sony devices” that will be “ultra rare and hard to obtain.” The company had to go out of its way to tell the Washington Post these weren’t NFTs, which pretty much tells you everything you need to know about the state of NFTs:

“It’s definitely not NFTs. Definitely not. You can’t trade them or sell them. It is not leveraging any blockchain technologies and definitely not NFTs,” Chen said.

Reddit launched Collectible Avatars that are NFTs but it couldn't say the word.

Sony is adding Digital Collectibles but is saying essentially "oh helllllll no they're not NFTs or blockchain or any of that shit whatsoever"

in case you were wondering how it's going. https://t.co/UDTUAF9MZj

— Richard Lawler (@rjcc) July 14, 2022

Again, there are some interesting applications when it comes to digital collectibles, even those on the blockchain. But they’ve been hard to see because they’ve been absolutely buried by a parade of cult-like opportunistic scammers and bullshit artists. That’s forced companies looking to explore digital collectible innovation to highlight at the very top of their marketing pitches not what they are, but what they aren’t.

A semi-novel idea was basically imploded by opportunists; now everybody involved in anything even tangentially related will have to spend the next two years first clarifying what their new product isn’t. That’s an obstacle you could maybe overcome if your idea is truly transformative, but it’s a hard sell when your big idea basically involves tethering some half-assed gifs to the blockchain for no reason.

Filed Under: cryptocurrency, digital collectibles, gaming, nfts Companies: reddit, sony

Antonio García Martínez recently invited me on his podcast, The Pull Request. I was thrilled. Antonio is witty, charming, and intimidatingly brilliant (he was a PhD student in physics at Berkeley, and it shows). We did the episode, and we had a great time. But we never got to an important topic—Antonio’s take on free speech and the Internet.

In April, Antonio released a piece on his Substack, Freeze peach and the Internet, in which he asserts the existence of a “‘content moderation’ regime that is utterly re-defining speech in liberal societies.” That “regime” wants, Antonio contends, to “arbitrate truth and regulate online behavior for the sake of some supposed greater good.” It is opposed by those who still support freedom of speech. Antonio believes that the “regime” and its opponents are locked in an epic battle, and that we all must pick a side.

I’m not sure what to make of some of Antonio’s claims. We’re told, for instance, that “freedom of reach is freedom of speech”—which sounds like a nod to the New Left’s call, in the 1960s and 70s, to seize “the means of communication.” But then we’re told that “Twitter isn’t obligated to give you reach if user interest in your speech is low.” So Antonio is not demanding reach equality. “It’s simply not the case,” he says, “that freedom of speech is some legal binary switched between an abstract allow/not-allow state.” Maybe, then, the point is that we must think about the effects of algorithmic amplification. Who is ignoring or attacking that point, I do not know.

At any rate, a general critique of Antonio’s article this post is not.

In 1951 Willard Van Orman Quine, one of the great analytic philosophers of the twentieth century, wrote a short paper called “Two Dogmas of Empiricism.” Quine put to the torch two key assumptions made by the logical positivists, a philosophical school popular in the first half of the century. Antonio, in his piece, promotes two key assumptions commonly made by those who fear “Big Tech censorship.” If Mike Masnick can riff on Arrow’s impossibility theorem to explain why content moderation is so difficult, I figure I can riff on Quine’s “dogmas” paper to explore two ways in which the fears of online “censorship” by private platforms are overblown. As we’re about to see, in fact, Quine’s work can teach us something valuable about content moderation.

Antonio’s first dogma is the belief that either you’re for free speech, or you’re not—you’re for the censors and the would-be arbiters of truth. His second is the belief that Twitter is the “public square,” and that the state of the restrictions there is the proper gauge of the state of free speech in our nation as a whole. With apologies to H.L. Mencken, these dogmas are clear, simple, and wrong.

Dogma #1: Free Speech: With Us or Against Us

AGM insists that the debate about content moderation boils down to a single overriding divide. “The real issue,” he says—the issue “the consensus pro-censorship crowd will never directly address”—is this:

Do you think freedom of speech includes the right to say and believe obnoxious stupid shit that’s almost certainly false, or do you feel platforms have the responsibility to arbitrate truth and regulate online behavior for the sake of some supposed greater good?

That’s it. “If you think” that “dumb and even offensive speech” is “protected speech,” you’re “on the Elon [Musk] side of this debate.” Otherwise, you think that “platforms should be putting their fingers on the scales,” and you’re therefore on “the anti-Elon” side. As if to add an exclamation point, Antonio declares: “Some countries have real free speech, and some countries have monarchs on their coins.” (I’ve seen it said, in a similar vein, that all anyone “really” cares about is “political censorship,” and that that’s the key issue the “consensus pro-censorship crowd” won’t grapple with.)

Antonio presents a nice, neat dividing line. There’s the stuff no one likes—Antonio points to dick pics, beheading videos, child sexual abuse material, and hate speech that incites violence—and then there’s people’s opinions. All the talk of content moderation is just obfuscation—an elaborate effort to hide this clear line. “Quibbling over the precise content policy in the pro-content moderation view,” Antonio warns, “is just haggling over implementation details, and essentially ceding the field to that side of the debate.”

The logical positivists, too, wanted some nice, neat lines. Bear with me.

Like most philosophers, the LPs wanted to know what we can know. One reason arguments often go in circles, or bog down in confusion, is that humans make a lot of statements that aren’t so much wrong as simply meaningless. Many sentences don’t connect to anything in the real world over which a productive argument can be had. (Extreme example: “the Absolute enters into, but is itself incapable of, evolution and progress.”) The LPs wanted to separate the wheat (statements of knowledge) from the chaff (metaphysical gobbledygook, empty emotive utterances, tribal call signs, etc.). To that end, they came up with something called the verification principle.

In 1936 a brash young thinker named A.J. Ayer—the AGM of early twentieth century philosophy—published a crisp and majestic but (as Ayer himself later admitted) often mistaken book, Language, Truth & Logic, in which he set forth the verification principle in its most succinct form. Can observation of the world convince us of the likely truth or falsity of a statement? If so, the statement can be verified. And “a sentence,” Ayer argued, “says nothing unless it is empirically verifiable.” That’s it.

Problem: mathematics and formal logic seem to reveal useful—indeed, surprising—things about the world, but without adhering to the verification principle. In the LPs’ view, though, this was just a wrinkle. They postulated a distinction between good, juicy “synthetic” statements that can be verified, and drab old “analytic” statements that, according to (young) Ayer, are just games we play with definitions. (“A being whose intellect was infinitely powerful would take no interest in logic and mathematics. For he would be able to see at a glance everything that his definitions implied[.]”)

So the LPs had two dogmas: that a sentence either does or does not refer to immediate experience, and that a sentence can be analytic or synthetic. But as Quine explained in his paper, these pat categories are rubbish. He addressed the latter dogma first, raising a number of problems with it that aren’t worth getting into here. (For one thing, definitions are set by human convention; their “correct” use is open to empirical debate.) He then took aim at the verification principle—or, as he put it, the “dogma of reductionism”—itself.

The logical positivists went wrong, Quine observed, in supposing “that each statement, taken in isolation from its fellows, can admit of confirmation or infirmation.” It’s “misleading to speak of the empirical content of an individual statement,” he explained, because statements “face the tribunal of sense experience not individually but only as a corporate body.” There aren’t two piles of statements—those that can be verified and those that can’t. Rather, “the totality of our so-called knowledge or beliefs, from the most casual matters of geography and history to the profoundest laws of atomic physics or even pure mathematics and logic,” is a continuous “man-made fabric.” As we learn new things, “truth values have to be redistributed over some of our statements. Re-evaluation of some statements entails re-evaluation of others.” Our knowledge is not a barrel of apples that we go through, apple-by-apple, keeping the ripe ones and tossing the rotten. It is, in the words of philosopher Simon Blackburn, a “jelly of belief,” the whole of which “quiver[s] in reaction to ‘recalcitrant’ or surprising experience.”

See how this ties into content moderation? Steve Bannon was booted from Twitter because he said: “I’d put [Anthony Fauci’s and Christopher Wray’s] heads on pikes. Right. I’d put them at the two corners of the White House. As a warning to federal bureaucrats: Either get with the program or you’re gone.” Is this just an outlandish opinion—some “obnoxious stupid shit that’s almost certainly false”—or is it an incitement to violence? Why is this statement different from, say, “I’d put Gentle’s and Funshine’s heads on pikes . . . as a warning to the other Care Bears”?

When Donald Trump told the January 6 rioters, “We love you. You’re very special,” was that political speech? Or was it sedition? As with “heads on pikes,” the statement itself won’t answer that question for you. The same problem arises when Senate candidate Eric Greitens invites you to go “RINO hunting,” or when a rightwing pundit announces that the Constitution is “null and void.” And who says we must look at each piece of content in isolation? Say the Oath Keepers are prevalent on your platform. They’re not planning an insurrection right now; they’re just riling each other up and getting their message out and recruiting. Is this just (dumb) political speech? Or is it more like a slowly developing beheading video? (If a platform says, “Don’t care where you go, guys, but you can’t stay here,” is it time to put monarchs on our coins?)

Similar issues arise with harassment. Doxxing, deadnaming, coordinated pile-ons, racist code words, Pepe memes—all present line-drawing issues that can’t be resolved with appeals to a simple divide between bad opinions and bad behavior. In each instance, we have no choice but to “quibbl[e] over the precise content policy.” Disagreement will reign, moreover, because each of us will enter the debate with a distinct set of political, cultural, contextual, and experiential priors. To some people, Jordan Peterson deadnaming Elliot Page is obviously harassment. To others (including, I confess, myself), his doing so pretty clearly falls within the rough-and-tumble of public debate. But that disagreement is not, at bottom, about that individual piece of content; it’s about the entire panoply of clashing priors.

It’s great that we have acerbic polemicists like Antonio. I’m glad that he’s out there pushing his conception of freedom and decrying safety-ism. (He’s on his strongest footing, I suppose, when he complains about the labeling, “fact-checking,” and blocking of Covid claims.) I hope that he and his swashbuckling ilk never stop defending “our American birthright of constant and cantankerous rebellion against the status quo.” But it’s just not true that there’s a free speech crowd and a pro-censorship crowd and nothing in between. Content moderation is complicated and difficult, and people’s views about it sit on a continuum.

Dogma #2: The Public Square, Website-by-Website

Antonio’s other dogma is the view—held by many—that Twitter is in some meaningful sense the “public square.” Antonio has some pointed criticisms for those who believe that “Twitter isn’t the public forum, and as such shouldn’t be treated with the sacrosanct respect we typically imbue anything First Amendment-related.”

As the second part of that sentence suggests, AGM gets to his destination by an idiosyncratic route. He seems to think that, in other people’s minds, the public square is where solemn and civilized discussion of public issues occurs. But as Antonio points out, there’s never been such a place. We’re Americans; we’ve always hashed things out by shouting at each other. Today, one of the places where we shout at each other is on Twitter. Ergo, in Antonio’s mind, Twitter is the public square.

I don’t get it. “Everyone invoking some fusty idea of ‘debate’ or even a healthy ‘marketplace of ideas,’” Antonio writes, “is citing bygone utopias that never were, and never will be.” Who is this “everyone”? Anyway, just because there’s a place where debate occurs does not mean that that place is the “public square.” In 2019 Antonio was saying that we should break up Facebook because it has a “stranglehold” on “attention.” So why isn’t it the public square? Perhaps it’s both Twitter and Facebook? But then what about Substack—where AGM published his piece? What about the many podcast platforms that carry his conversations? What about Rumble and TikTok? Heck, what about Techdirt? The “public square”—if we really must go about trying to precisely define such a thing—is not Twitter but the Internet.

Antonio appeals to the “conditions our democracy was born in.” The “vicious, ribald, scabrous, offensive, and often violent tumult of the Founders’ era,” he notes, “makes modern Twitter look like a Mormon picnic by comparison.” This begs the question. Look at what Americans are saying on the Internet as a whole; it’s as vicious, ribald, scabrous, offensive, and violent as you please. If what matters is that our discourse resemble that of the founding era, we can rest easy. Ben Franklin’s brother used his publication, The New-England Courant, to rail against smallpox inoculation; modern anti-vaxxers use Gab to similar effect. James Callender used newspapers and pamphlets to viciously (but often accurately) attack Adams, Hamilton, and Jefferson; Matt Taibbi and Glenn Greenwald use newsletters and podcasts to viciously (but at times accurately) attack Joe Biden and Hillary Clinton. In his Porcupine’s Gazette, William Cobbett cried, “Professions of impartiality I shall make none”; the website American Greatness boasts about being called “a hotbed of far-right Trumpist nationalism.” Plus ça change . . .

Antonio says that we need “unfettered debate” in a “public square” that we “shar[e]” with “our despised political enemies.” Surveying the Internet, I’d say we have exactly that.

Now, I don’t deny that there’s a swarm of activists, researchers, academics, columnists, politicians, and government officials—not to mention the tech companies themselves—that make up what journalist Joe Bernstein calls “Big Disinfo.” Not surprisingly, the old gatekeepers of information, along with those who once benefited from greater information gatekeeping, are upset that social media allows information to bypass gates. “That the most prestigious liberal institutions of the pre-digital age are the most invested in fighting disinformation,” Bernstein submits, “reveals a lot about what they stand to lose, or hope to regain.” Indeed.

But so what? There’s a certain irony here. The people most convinced that our elite institutions are inept and crumbling are also the ones most concerned that those institutions will take over the Internet, throttle speech, and (toughest of all) reshape opinion—all, presumably, without violating the First Amendment. Are the forces of Big Disinfo really that competent? Please.

Antonio and I are both fans of Martin Gurri, whose 2014 book The Revolt of the Public is basically a long meditation on why Antonio’s “content-moderation regime” can’t succeed. “A curious thing happens to sources of information under conditions of scarcity,” Gurri proposes. “They become authoritative.” Thanks to the Internet, however, we are living through an unprecedented information explosion. When there’s information abundance, no claim is authoritative. Many claims must compete with each other. All claims (but especially elite claims) are questioned, challenged, and ridiculed. (In this telling, our current tumult is more vicious, ribald, etc., than that of the founding era.) Unable to shut down competing claims, elites can’t speak with authority. Unable to speak with authority, they can’t shut down competing claims.

Short of an asteroid strike, World War III, the rise of a thoroughgoing despotism, or some kind of Butlerian jihad, the flow of information can’t be stopped.

Filed Under: antonio garcia martinez, content moderation, free reach, free speech, public square

It seems to have become accepted wisdom by many — including policymakers — that social media is dangerous for kids. But every time we look at the details, the data is lacking. This is not for a lack of trying, of course. There have been tons of studies that try to make the link, but most of them fail to turn up anything significant. It’s not that there aren’t kids who are depressed and/or suicidal. There are. And many of them are on social media. Because basically all kids are these days. But making the link is what’s difficult.

Some of the people pushing this narrative just like to ignore inconvenient facts. Remember, of course, all the interest in Frances Haugen’s leaked files from inside Facebook/Instagram. The headline story was that Facebook knew that Instagram made girls feel worse about themselves. But the reality was that the study showed that it made a much higher percentage feel better about themselves.

But that version doesn’t make headlines.

Of course, one of the leading voices promoting this message is Jonathan Haidt, who has argued repeatedly that social media is leading to more depression and suicide. However, Taylor Barkley, over at the Center for Growth and Opportunity, recently had an interesting article that broke down the data. Haidt highlighted suicide statistics for teenagers from 2000 to 2020. And, from there, you see a slight, but noticeable uptick after 2015:

But... as Barkley notes, if you actually go past 2000, and look at the data from the CDC back to 1968, the story seems somewhat different:


So that's all kind of fascinating. Basically the suicide rate for teens was way higher in the late 80s and early 90s before dropping again in the late 90s and early 2000s. The current uptick is still mostly below the highs from a few decades ago, and obviously back then there really wasn't social media.
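Barkley's point is really about where you truncate a time series. The sketch below uses invented illustrative numbers (not CDC data) shaped like the pattern described above — a climb to an early-90s peak, a long decline, then a modest recent uptick — and shows how the same series tells two different stories depending on the window you look at:

```python
import numpy as np

# Illustrative yearly rates (NOT real CDC figures), shaped like the
# pattern described in the article: climb to an early-90s peak,
# long decline, flat 2000s, modest uptick after 2015.
years = np.arange(1968, 2021)
rates = np.concatenate([
    np.linspace(6.0, 11.0, 20),   # 1968-1987: long climb
    np.full(7, 11.0),             # 1988-1994: early-90s peak plateau
    np.linspace(11.0, 6.5, 13),   # 1995-2007: decline
    np.full(7, 6.5),              # 2008-2014: flat
    np.linspace(6.5, 9.5, 6),     # 2015-2020: recent uptick
])

def peak_year(y, r, start):
    """Year of the maximum rate, considering only data from `start` onward."""
    mask = y >= start
    return int(y[mask][np.argmax(r[mask])])

print(peak_year(years, rates, 2000))  # window starting at 2000: peak looks brand new
print(peak_year(years, rates, 1968))  # full series: the real peak is decades earlier
```

Start the chart at 2000 and the recent uptick is the highest point you can see; start it at 1968 and it's a partial rebound toward a peak that predates social media entirely.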

It's almost as if there might be something else going on. Social media may be having some impact here -- but the general data suggests there's likely a lot more at play, and a bunch of factors likely contribute to suicide rates.

And, as we've said before, social media is an easy scapegoat for politicians who don't want to do the work to understand the actual root causes of societal problems. This is why California is moving forward with its bill that states, upfront, that it's "proven" that social media is bad for kids. It's why next week the Senate will hold a hearing on the dangerous Kids Online Safety Act that would create universal surveillance of kids online, based on the unproven belief that social media is harming them.

These are being pushed by either deeply confused or deeply cynical politicians who simply don't want to actually do the work to figure out why suicide rates have ticked up recently, but are absolutely sure it must be social media.

There are real issues here. Figuring out why suicide rates for teens have gone up deserves actual careful study -- not grandstanding politicians with big ideas.

Filed Under: evidence, for the children, jonathan haidt, protect the children, suicide, suicide rates

Canada’s Bill C-11, which will hand the country’s broadcast regulator new powers to set rules for all kinds of online video and audio content, was rushed through an undemocratic sham of a “review” and then passed in the House of Commons by the reigning Liberal government. Now, it’s sitting in the Senate, where the last hope of preventing it rests on Senators sticking to their assertion that they won’t be pressured by a government that is clearly intent on making it law without addressing any of the myriad serious concerns about what it would do. In the meantime, the office of the Heritage Minister (the driving force behind C-11) seems intent on continuing with the pattern it has established ever since the bill was first introduced as C-10 in 2020: ignoring or dismissing all critics, and insisting that the actual text of the bill doesn’t matter.

Instead, the government wants everyone to focus on their “policy intentions” — the things they say they hope the bill will achieve, and their repeated promises that it won’t do anything else. Nobody is supposed to care that these “intentions” don’t line up with what the bill actually contains, or that there are countless signals that these promises are false. Following a bit of a Twitter fight with an official from the Heritage Office, the University of Ottawa’s Michael Geist laid out a damning list of these contradictions:

My tweet thread response notes that the disconnect between the government’s professed intent and the actual text in Bill C-11 has been a persistent issue:

As Geist notes, all of these government promises could have been solidified in the actual bill with some clarifying amendments, many of which were among the more than 100 that were proposed — but instead the House of Commons rushed the clause-by-clause review of the bill and the voting on amendments at an absurd speed, imposing a completely unnecessary deadline that resulted in a near-total lack of debate and MPs voting on some amendments before the text had even been publicly released.

And so we find ourselves facing a bill that could usher in sweeping changes to the internet in Canada, impacting not just the big streaming platforms like Netflix that the government constantly insists are the real target, but just about every online media platform and the creators who use them. Who wants this bill? Certainly not Canadian content creators, and seemingly nobody except the government that is so intent on ramming it down the country’s throat.

But since the ruling Liberal party reached a deal that ensures them the near-unquestioning support of the left-wing NDP party in parliament, they seem intent on doing whatever they want when it comes to C-11, no matter how brazenly undemocratic. So all hope rests on the Senate, which refused to rush the bill through as C-10 last year and is famously labelled in Canada as the place of “sober second thought”, to block the bill or at least fix its most egregious problems. The advocacy group OpenMedia, which maintains an excellent FAQ about the problems with the bill, is calling on Canadians to let Senators know how important it is.

Filed Under: c-11, canada, content, streaming

If capturing a bird’s eye view of your favorite places is a fun way for you to unwind when you have some time, then the Vivitar VTI Phoenix Foldable Camera Drone (certified refurbished) is a great choice for updating your hobby’s capabilities. All the pieces come secured in the hard-sided carrying case, which helps protect them from damage as well as keeps them neatly organized. The two included batteries allow for a combined flight time of over 32 minutes, so you can get the most out of your drone’s 1152p video camera. With a range of 2000 feet, Follow Me technology, GPS location locking, and Wi-Fi transmission capability, this drone has all the bells and whistles you need. It’s on sale for $125.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Based on (admittedly scattershot) case law, the best protection for your phone (and constitutional rights) seems to depend on whatever device owners feel is the most persistent (or dangerous) threat.

If you, a regular phone owner, feel the worst thing that could happen to you is the theft of your phone, then using biometric features to lock/unlock your device is probably the most secure option. It means thieves have to have access to both you and your phone if they hope to access far more sensitive data. And it makes even more sense if you’re one of the, oh, I don’t know… ~250 million Americans who occasionally reuse passwords. This prevents phone thieves from using a seemingly endless number of data breaches to find a way into your phone.

But if you feel law enforcement agencies are the more worrisome threat, it makes more sense to use a passcode. Why? Because courts have been far more willing to call the compelled production of passcodes the equivalent of testifying against yourself, resulting in the rejection of warrant requests and the suppression of evidence.

And it’s not just criminals who may feel the cops are the worst. Activists, journalists, lawyers, security researchers… these are all people who may not want interloping cops to easily access the contents of their devices simply by mashing their faces, retinas, or fingerprints into their lockscreens.

So, since courts have decided (with rare exceptions) that utilizing biometric features is “non-testimonial,” that’s the option law enforcement officers will try to use first. As some courts see it, you get fingerprinted when you’re arrested, so applying a finger to a phone doesn’t seem to be enough of a stretch to bring the Constitution into it.

But to this point, the (compelled) deployment of biometric features has been used to unlock devices. In this case, first reported by Thomas Brewster for Forbes, the FBI went deeper: it secured a warrant allowing it to use a suspect’s face to unlock his Wickr account.

In November last year, an undercover agent with the FBI was inside a group on Amazon-owned messaging app Wickr, with a name referencing young girls. The group was devoted to sharing child sexual abuse material (CSAM) within the protection of the encrypted app, which is also used by the U.S. government, journalists and activists for private communications. Encryption makes it almost impossible for law enforcement to intercept messages sent over Wickr, but this agent had found a way to infiltrate the chat, where they could start piecing together who was sharing the material.

As part of the investigation into the members of this Wickr group, the FBI used a previously unreported search warrant method to force one member to unlock the encrypted messaging app using his face. The FBI has previously forced users to unlock an iPhone with Face ID, but this search warrant, obtained by Forbes, represents the first known public record of a U.S. law enforcement agency getting a judge’s permission to unlock an encrypted messaging app with someone’s biometrics.

As Brewster states, this is the first time biometric features have been used (via judicial compulsion) to unlock an encrypted service, rather than a device. No doubt this will be challenged by the suspect’s lawyer. And, speaking of lawyers, the FBI really wanted this to go another way, but was apparently inconvenienced by someone willing to protect their arrestee’s rights.

Just in case it’s not perfectly clear, law enforcement agencies will do everything they can to bypass a suspect’s rights and often only seem to be deterred by the arrival of someone who definitely knows the law better than they do. I mean, it’s right there in the affidavit [PDF]:

By the time it was made known to the FBI that facial recognition was needed to access the locked application Wickr, TERRY had asked for an attorney.

Therefore, the United States seeks this additional search warrant seeking TERRY's biometric facial recognition is requested to complete the search of TERRY's Apple iPhone 11.

It looks like the FBI only decided to seek a warrant because the suspect had requested legal counsel. It's unlikely seeking a warrant was in the cards before the suspect asked for an attorney. The FBI had plenty of options up to that point: using a 302 report to create an FBI-centric narrative, lying to the suspect about evidence or co-defendants (or whatever), endlessly begging for consent, or simply pretending there were no unambiguous assertions of rights. It was only the presence of the lawyer that forced the FBI to acknowledge the Constitution existed, even if its response was to roll the dice on Fifth Amendment jurisprudence.

This dice roll worked. But it’s sure to be challenged. There’s not enough settled law to say the FBI was in the right, even with a warrant. What’s on the line is the Fifth Amendment itself. And if passcodes can’t be compelled, then biometric features should be similarly protected, since they both accomplish the same thing: the production of evidence the government hopes to use against the people whose compliance it has managed to compel.

Filed Under: 4th amendment, 5th amendment, biometrics, doj, facial recognition, phones

The US has always had a fairly pathetic definition of “broadband.”

Originally defined as anything over 200 kbps in either direction, the definition was updated in 2010 to a pathetic 4 Mbps down, 1 Mbps up. It was updated again in 2015 by the FCC to a better, but still arguably pathetic 25 Mbps downstream, 3 Mbps upstream. As we noted then, the broadband industry whined incessantly about having any higher standards, as it would only further highlight industry failure, the harm of monopolization, and a lack of competition.

Unfortunately for them, pressure has only grown to push the US definition of broadband even higher.

In 2021, a coalition of Senators wrote the Biden administration to recommend that 100 Mbps in both directions become the new baseline. After some lobbying by cable and wireless companies (whose upstream speeds couldn’t match that standard), FCC boss Jessica Rosenworcel last week proposed a new standard: 100 Mbps downstream, 20 Mbps upstream.

“The 25/3 metric isn’t just behind the times, it’s a harmful one because it masks the extent to which low-income neighborhoods and rural communities are being left behind and left offline. That’s why we need to raise the standard for minimum broadband speeds now while also aiming even higher for the future, because we need to set big goals if we want everyone everywhere to have a fair shot at 21st century success.”

It’s worth noting that the $42+ billion in broadband subsidies coming as part of the Infrastructure Bill already has this higher 100/20 standard attached. But the shift would still help determine which parts of the country remain stuck on dated DSL and cable technologies, applying some pressure on fiber-investment-phobic companies ill-prepared for the Zoom era.
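To put the jump from the old 3 Mbps upstream floor to the proposed 20 Mbps in perspective, here's a quick back-of-the-envelope calculation (an illustrative sketch only; real-world throughput falls short of advertised rates due to overhead and congestion):

```python
def upload_seconds(size_gb: float, mbps: float) -> float:
    """Idealized transfer time for a file of size_gb (decimal GB)
    over a link running at mbps megabits per second."""
    megabits = size_gb * 8000  # 1 GB = 8,000 megabits (decimal units)
    return megabits / mbps

# Uploading a 1 GB video at each upstream floor:
for label, up in [("old 25/3 floor", 3), ("proposed 100/20 floor", 20)]:
    minutes = upload_seconds(1, up) / 60
    print(f"{label}: 1 GB upload takes ~{minutes:.1f} minutes")
```

At the old 3 Mbps floor, a single 1 GB upload ties up the connection for the better part of an hour; at 20 Mbps it drops to a few minutes, which is why the upstream number matters so much for video calls and cloud backups.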

The problem: it’s not clear the FCC has the votes to actually make this happen. The telecom industry has intentionally gridlocked the agency at 2-2 commissioners with its protracted lobbying assault on the appointment of Gigi Sohn.

They don’t want the FCC to implement widely popular reforms like the restoration of net neutrality and media consolidation rules. And they sure as hell don’t want anything that further amplifies the negative impact of letting regional monopolies run amok for thirty straight years.

Commissioners Simington and Carr, both Trump appointments, generally vote in lockstep with the telecom industry, which, again, strictly opposes any policy reforms that might highlight market failure, substandard speeds and deployments, or the pretty obvious impact of monopolization.

If we’re lucky, companies like AT&T and Comcast think the higher standard is inevitable, have given up fighting it, and won’t push Carr and Simington to oppose it. But the very fact that it’s ultimately up to telecom monopolies to determine what policy reform occurs in a purported democracy pretty much speaks for itself.

Filed Under: broadband, definition, digital divide, fcc, high speed internet, Mbps, monopoly, telecom

